Revealed: bias found in AI system used to detect UK benefits fraud

The Guardian

An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people's age, disability, marital status and nationality, the Guardian can reveal.

An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released under the Freedom of Information Act by the Department for Work and Pensions (DWP). The "statistically significant outcome disparity" emerged in a "fairness analysis" of the automated system for universal credit advances carried out in February this year.

The emergence of the bias comes after the DWP this summer claimed the AI system "does not present any immediate concerns of discrimination, unfair treatment or detrimental impact on customers". This assurance came in part because the final decision on whether a person gets a welfare payment is still made by a human, and officials believe the continued use of the system – which is attempting to help cut an estimated £8bn a year lost in fraud and error – is "reasonable and proportionate".